Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap

Authors

  • Aryan Mokhtari
  • S. Hamed Hassani
  • Amin Karbasi
Abstract

In this paper, we study the problem of constrained and stochastic continuous submodular maximization. Even though the objective function is not concave (nor convex) and is defined in terms of an expectation, we develop a variant of the conditional gradient method, called Stochastic Continuous Greedy, which achieves a tight approximation guarantee. More precisely, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that Stochastic Continuous Greedy achieves a [(1 − 1/e)OPT − ε] guarantee (in expectation) with O(1/ε³) stochastic gradient computations. This guarantee matches the known hardness results and closes the gap between deterministic and stochastic continuous submodular maximization. By using stochastic continuous optimization as an interface, we also provide the first (1 − 1/e) tight approximation guarantee for maximizing a monotone but stochastic submodular set function subject to a general matroid constraint.
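As a point of reference, the following is a minimal Python sketch of a Stochastic Continuous Greedy style update, assuming the caller supplies an unbiased stochastic gradient oracle `stoch_grad` and a linear maximization oracle `lmo` over the convex body; the averaging weight schedule shown is one common choice and may differ from the paper's exact parameters.

```python
import numpy as np

def stochastic_continuous_greedy(stoch_grad, lmo, dim, T=100):
    """Sketch of a Stochastic Continuous Greedy (SCG) style method.

    stoch_grad(x): unbiased stochastic gradient of the objective F at x (assumed).
    lmo(g): linear maximization oracle, argmax of <v, g> over v in the convex body C (assumed).
    """
    x = np.zeros(dim)   # assumes the origin lies in C
    d = np.zeros(dim)   # running average of stochastic gradients
    for t in range(1, T + 1):
        rho = 4.0 / (t + 8) ** (2.0 / 3.0)       # illustrative averaging weight
        d = (1 - rho) * d + rho * stoch_grad(x)  # damps gradient noise over time
        v = lmo(d)                               # best feasible ascent direction
        x = x + v / T    # after T steps, x is an average of points in C, hence feasible
    return x
```

For a cardinality (uniform matroid) polytope, `lmo(g)` can simply return the indicator vector of the k largest coordinates of g.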


Related Papers

Stochastic Submodular Maximization: The Case of Coverage Functions

Stochastic optimization of continuous objectives is at the heart of modern machine learning. However, many important problems are of discrete nature and often involve submodular objectives. We seek to unleash the power of stochastic continuous optimization, namely stochastic gradient descent and its variants, on such discrete problems. We first introduce the problem of stochastic submodular opt...
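To make the recipe concrete, here is an illustrative sketch for a weighted coverage function: projected stochastic gradient ascent on its concave relaxation F̄(x) = Σ_u w_u · min(1, Σ_{v ∈ S_u} x_v) over the polytope {0 ≤ x ≤ 1, Σ x ≤ k}. The single-element sampling and the bisection-based projection are assumptions for the sketch, not necessarily the paper's construction.

```python
import numpy as np

def project_capped_simplex(y, k, iters=50):
    """Euclidean projection onto {x : 0 <= x <= 1, sum(x) <= k} via bisection."""
    x = np.clip(y, 0.0, 1.0)
    if x.sum() <= k:
        return x
    lo, hi = 0.0, y.max()
    for _ in range(iters):
        tau = 0.5 * (lo + hi)
        if np.clip(y - tau, 0.0, 1.0).sum() > k:
            lo = tau
        else:
            hi = tau
    return np.clip(y - hi, 0.0, 1.0)

def sgd_coverage(cover_sets, weights, n_items, k, steps=500, lr=0.1):
    """Projected SGD on the concave relaxation of a weighted coverage function.

    cover_sets[u]: indices of the items that cover element u (assumed given).
    weights: nonnegative weight per element, as a NumPy array.
    """
    rng = np.random.default_rng(0)
    x = np.full(n_items, k / n_items)              # feasible starting point
    probs = weights / weights.sum()
    for _ in range(steps):
        u = rng.choice(len(cover_sets), p=probs)   # sample one element by weight
        g = np.zeros(n_items)
        if x[cover_sets[u]].sum() < 1.0:           # subgradient of min(1, sum x_v)
            g[cover_sets[u]] = weights.sum()       # importance-weighted estimate
        x = project_capped_simplex(x + lr * g, k)
    return x   # fractional solution; round (e.g. keep the k largest coordinates)
```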


Stochastic Submodular Maximization

We study the stochastic submodular maximization problem subject to a cardinality constraint. Our model can capture the effect of uncertainty in different problems, such as cascade effects in social networks, capital budgeting, sensor placement, etc. We study non-adaptive and adaptive policies and give optimal constant-factor approximation algorithms for both cases. We also bound the adaptivity gap of...
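The optimal policies from this line of work are not reproduced here, but the flavor of a non-adaptive policy can be illustrated by the classic greedy algorithm with Monte Carlo estimates of expected marginal gains; `sample_f` below is a hypothetical simulator of the stochastic set function (e.g., one cascade realization seeded at S).

```python
import numpy as np

def greedy_stochastic_cardinality(sample_f, ground_set, k, n_samples=20):
    """Greedy maximization of E[f(S)] subject to |S| <= k.

    sample_f(S): draws one realization of the stochastic set function at S (assumed).
    """
    S = []
    est = lambda T: np.mean([sample_f(T) for _ in range(n_samples)])  # Monte Carlo E[f(T)]
    base = est(S)
    for _ in range(k):
        gains = {e: est(S + [e]) - base for e in ground_set if e not in S}
        best = max(gains, key=gains.get)   # element with largest estimated marginal gain
        if gains[best] <= 0:
            break
        S.append(best)
        base = est(S)
    return S
```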


Finite-time Analysis for the Knowledge-Gradient Policy

We consider sequential decision problems in which we adaptively choose one of finitely many alternatives and observe a stochastic reward. We offer a new perspective, interpreting Bayesian ranking and selection problems as adaptive stochastic multi-set maximization problems, and derive the first finite-time bound of the knowledge-gradient policy for adaptive submodular objective functions. In a...
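For context, the knowledge-gradient factor has a closed form in the classic setting of independent normal beliefs with Gaussian observation noise; the sketch below computes it per alternative. This is the standard ranking-and-selection formula, not the paper's adaptive submodular analysis.

```python
import numpy as np
from scipy.stats import norm

def knowledge_gradient(mu, sigma, noise_std):
    """KG factor for each alternative under independent Gaussian beliefs.

    mu, sigma: posterior mean and std per alternative (at least two alternatives).
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    # std of the change in the posterior mean after one noisy observation
    sigma_tilde = sigma**2 / np.sqrt(sigma**2 + noise_std**2)
    kg = np.empty_like(mu)
    for x in range(len(mu)):
        best_other = np.delete(mu, x).max()
        zeta = -abs(mu[x] - best_other) / sigma_tilde[x]
        kg[x] = sigma_tilde[x] * (zeta * norm.cdf(zeta) + norm.pdf(zeta))
    return kg   # policy: measure argmax(kg) next
```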


Submodular Mini-Batch Training in Generative Moment Matching Networks

Generative moment matching network (GMMN), which is based on the maximum mean discrepancy (MMD) measure, is a generative model for unsupervised learning, where mini-batch stochastic gradient descent is applied to update the parameters. In this work, instead of obtaining a mini-batch randomly, each mini-batch in the iterations is selected in a submodular way such that the most informativ...
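The snippet cuts off before the selection criterion, so as an illustrative stand-in the sketch below picks a representative mini-batch by greedily maximizing a facility-location function over pairwise similarities (monotone submodular, so greedy achieves a (1 − 1/e) approximation); the inner-product kernel and nonnegative similarities are assumptions.

```python
import numpy as np

def submodular_minibatch(X, batch_size):
    """Greedy facility-location selection: f(S) = sum_i max_{j in S} sim(i, j)."""
    sim = X @ X.T                  # similarity kernel (assumed nonnegative, e.g. normalized features)
    best_cov = np.zeros(len(X))    # best similarity of each point to the batch so far
    batch = []
    for _ in range(batch_size):
        gains = np.maximum(sim - best_cov[:, None], 0.0).sum(axis=0)  # marginal gains
        gains[batch] = -1.0        # never re-select a chosen point
        j = int(np.argmax(gains))
        batch.append(j)
        best_cov = np.maximum(best_cov, sim[:, j])
    return batch
```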


Gradient Methods for Submodular Maximization

In this paper, we study the problem of maximizing continuous submodular functions that naturally arise in many learning applications such as those involving utility functions in active learning and sensing, matrix approximations and network inference. Despite the apparent lack of convexity in such functions, we prove that stochastic projected gradient methods can provide strong approximation gu...
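A generic form of such a method is sketched below: stochastic gradient ascent with a diminishing step size, a Euclidean projection back onto the constraint set, and iterate averaging. The step-size schedule and the assumed `project` oracle are illustrative choices, not the paper's exact prescription.

```python
import numpy as np

def projected_sga(stoch_grad, project, x0, steps=1000):
    """Stochastic projected gradient ascent for continuous submodular maximization.

    stoch_grad(x): unbiased stochastic gradient at x (assumed).
    project(y): Euclidean projection onto the convex constraint set (assumed).
    """
    x = x0.copy()
    avg = np.zeros_like(x0)
    for t in range(1, steps + 1):
        x = project(x + stoch_grad(x) / np.sqrt(t))  # ascend, then restore feasibility
        avg += (x - avg) / t                          # running average of the iterates
    return avg   # an average of feasible points is feasible (the set is convex)
```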




Journal: CoRR

Volume: abs/1711.01660

Publication date: 2017